Idan Habler

AI Security Researcher

AI Software and Platform

Idan Habler is an AI Security Researcher at Cisco, where he focuses on securing agentic and autonomous AI systems across enterprise environments. His expertise spans agentic threat modeling, AI red-teaming, secure tool and agent-to-agent interactions, and defense-in-depth architectures for generative AI systems. Prior to Cisco, Idan worked as an AI Security Researcher at Intuit and served in senior cyber R&D and cyber security roles within Israel’s military. An active researcher, Idan holds a Ph.D. in Software and Information Systems Engineering from Ben-Gurion University, where his research focused on cyber risk assessment and advanced threats to complex systems. Beyond his corporate role, Idan is a core team member of the OWASP Securing Agentic Applications initiative and a founding member of AIVSS. Through OWASP, he co-authors security standards and guidance, including the Agent Name Service (ANS) and the Agent-to-Agent Secure (A2AS) protocol, and develops practical threat modeling frameworks for multi-agent AI systems. A recognized contributor to the AI security community, Idan has published work in leading research and industry venues, and he regularly collaborates with industry, academia, and open-source communities to advance secure-by-design approaches for agentic AI systems at scale.

Articles

Identifying and remediating a persistent memory compromise in Claude Code

4 min read

We recently discovered a method to compromise Claude Code’s memory and maintain persistence beyond our immediate session into every project, every session, and even after reboots. In this post, we’ll break down how we were able to poison an AI...

Your Model’s Memory Has Been Compromised: Adversarial Hubness in RAG Systems

3 min read

Prompt injections and jailbreaks remain a major concern for AI security, and for good reason: models remain susceptible to being tricked into bypassing guardrails or leaking system prompts. But AI deployments don’t just process prompts at inference time (that is, when you are actively querying the model): they may also retrieve, rank, and synthesize external data in real time. Each of those steps is a potential adversarial entry point.